15 - Artificial Intelligence II [ID:9243]

Welcome back to this week's AI.

We've been talking about sequential decision problems. Basically the idea is that we look at full agents, meaning decision problems: not just modelling the world, but also taking decisions in worlds where you want to take the aspect of time seriously, that is, worlds that actually change on a time scale commensurate with your deliberation and action time scales. You can't just take a time-out, think a bit, and then do something; something else will be happening while you're doing that. Which means you actually need to take time into account, because the world, the environment, isn't static while you're acting.

We've looked at two things: Markov processes and Markov decision problems. Markov decision problems build on Markov processes, and Markov processes are something where we're just modelling, not taking decisions. You looked at Markov decision problems last week with Dennis, where the decision aspect comes in.
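
As a reminder of what that decision aspect amounts to computationally, here is a minimal value-iteration sketch; the toy states, transitions, rewards, and discount factor are made up for illustration and are not taken from the lecture.

```python
# Minimal value iteration for a toy MDP (illustration only; the model below is invented).
# Bellman update: V(s) <- max_a sum_{s'} P(s' | s, a) * (R(s, a, s') + gamma * V(s'))

# T[state][action] = list of (probability, next_state, reward)
T = {
    "s0": {"go":   [(0.8, "s1", 0.0), (0.2, "s0", 0.0)],
           "stay": [(1.0, "s0", 0.0)]},
    "s1": {"go":   [(1.0, "s2", 1.0)],
           "stay": [(1.0, "s1", 0.0)]},
    "s2": {"go":   [(1.0, "s2", 0.0)],
           "stay": [(1.0, "s2", 0.0)]},
}
gamma = 0.9  # discount factor

V = {s: 0.0 for s in T}
for _ in range(100):  # enough sweeps for this toy model to converge
    V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in T[s][a]) for a in T[s])
         for s in T}

print({s: round(v, 2) for s, v in V.items()})
```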

What we're going to do today is rid ourselves of the assumption that the world is fully observable. We're transitioning from Markov decision problems to partially observable Markov decision problems. That's quite a mouthful, so we're going to say POMDPs. That's essentially what we're doing.

The techniques here are relatively close to what we've scratched the surface of in planning. Planning was also something where we were using search, in a fully observable and deterministic world, and it takes time into account. Remember, we had, in a sense, a time-sliced model: in planning, we took time into account by having these add and delete lists of facts. The word "facts" already tells you it's a deterministic world. We don't actually need a belief model; we have a world model. We know what's happening, we can observe everything, and with deterministic actions we can plan ahead.
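
As a quick reminder of those add and delete lists, here is a tiny STRIPS-style sketch; the facts and the single action below are invented for illustration.

```python
# A tiny STRIPS-style sketch (illustration only; facts and action are made up).
# A state is a set of facts; an action has preconditions, an add list, and a
# delete list -- deterministic and fully observable, so no belief model is needed.

state = {("at", "robot", "room1"), ("door_open", "room1", "room2")}

move = {
    "pre": {("at", "robot", "room1"), ("door_open", "room1", "room2")},
    "add": {("at", "robot", "room2")},
    "del": {("at", "robot", "room1")},
}

def apply_action(action, state):
    """Apply a STRIPS action if its preconditions hold in the current state."""
    assert action["pre"] <= state, "preconditions not satisfied"
    return (state - action["del"]) | action["add"]

print(apply_action(move, state))  # the robot is now at room2
```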

Planning is really the very simple case: it is fully observable, and it is similar to MDPs, only that MDPs add uncertainty and a utility to the picture. If you add uncertainty and utility to the picture, then you get this kind of mixture of techniques from MDPs and so on. If on top of that you also add uncertain sensing, which is what we're going to do today, then you end up with POMDPs.
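
Since uncertain sensing is the new ingredient, here is a minimal sketch of the belief update a POMDP agent performs after taking action a and observing o, namely b'(s') ∝ O(o | s', a) · Σ_s T(s' | s, a) · b(s); the transition and observation probabilities below are made-up illustration values.

```python
# Minimal POMDP belief update (illustration only; the models below are invented).
# b'(s') is proportional to O(o | s', a) * sum_s T(s' | s, a) * b(s)

states = ["s0", "s1"]

# T[a][s][s'] = probability of reaching s' when doing a in s
T = {"go": {"s0": {"s0": 0.3, "s1": 0.7},
            "s1": {"s0": 0.0, "s1": 1.0}}}

# O[a][s'][o] = probability of observing o after doing a and landing in s'
O = {"go": {"s0": {"beep": 0.1, "silence": 0.9},
            "s1": {"beep": 0.8, "silence": 0.2}}}

def belief_update(b, a, o):
    """One Bayesian filter step: predict with the action model, then weight by the observation."""
    new_b = {}
    for s2 in states:
        predicted = sum(T[a][s][s2] * b[s] for s in states)  # prediction step
        new_b[s2] = O[a][s2][o] * predicted                  # observation step
    norm = sum(new_b.values())
    return {s: p / norm for s, p in new_b.items()}

b = {"s0": 0.5, "s1": 0.5}             # start out maximally uncertain
print(belief_update(b, "go", "beep"))  # belief shifts strongly toward s1
```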

The last thing we're really going to say about POMDPs is that the world is a POMDP. Not surprisingly: we've added everything that's good and expensive to the mix. We've added time, we've added uncertainty, we've added uncertain sensing. Everything that we had abstracted away last semester and at the beginning of this semester, we're adding back. That's going to be the eventual agent design.

I just want to remind you of this similarity to planning and game playing and so on, which is going to pop up all over the place.

Part of a video series:

Accessible via: Open access
Duration: 01:20:23 min
Recording date: 2018-06-06
Uploaded on: 2018-06-06 20:41:56
Language: en-US

This course covers the foundations of Artificial Intelligence (AI), in particular techniques for reasoning under uncertainty, machine learning, and natural language understanding.
It builds on the lecture Künstliche Intelligenz I from the winter semester and continues it.

Learning objectives and competencies
Subject, learning, and methodological competence

  • Knowledge: Students become familiar with fundamental representation formalisms and algorithms of Artificial Intelligence.

  • Application: The concepts are applied to real-world examples (exercise problems).

  • Analysis: By modelling them in the machine, students learn to better assess human intelligence capabilities.
